Subject: Advanced Mathematical Applications for Deep Learning II¶

Project_I: Human Emotion Detection¶

Team Member: Virajkumar Tank (101411542)

Problem Statement: I will fine-tune a model by changing its hyperparameters and applying a pre-trained model in order to achieve higher accuracy, and I will also prepare it for emotion detection in video.

Dataset Description: I am going to develop a deep learning neural network model that detects human emotions. The dataset contains 35,887 examples of 48x48-pixel grayscale face images, divided into train and test sets. The training set includes 28,709 images and the test set 7,178. Images are categorized by the emotion shown in the facial expression (happiness, neutral, sadness, anger, surprise, disgust, fear).
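As a quick sanity check on a dataset organized this way (one sub-folder per emotion under `train/` and `test/`), the per-class image counts can be tallied with a small helper. The `archive/train` path and folder layout are assumptions based on the dataset description above; the demo below runs on a tiny synthetic tree so it works anywhere.

```python
import os
import tempfile
from collections import Counter

def count_images_per_class(split_dir):
    """Count the image files in each class sub-folder of a train/ or test/ split."""
    counts = Counter()
    for label in sorted(os.listdir(split_dir)):
        label_dir = os.path.join(split_dir, label)
        if os.path.isdir(label_dir):
            counts[label] = len(os.listdir(label_dir))
    return counts

# Demo on a tiny synthetic tree; after unzipping, point it at "archive/train" instead.
with tempfile.TemporaryDirectory() as root:
    for label, n in [("happy", 3), ("sad", 2)]:
        os.makedirs(os.path.join(root, label))
        for i in range(n):
            open(os.path.join(root, label, "im%d.png" % i), "w").close()
    print(count_images_per_class(root))  # Counter({'happy': 3, 'sad': 2})
```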

The existing notebook I used previously, which classified the fill level of water bottles, reached around 60-65% accuracy. It used a Convolutional Neural Network (CNN), but the model's accuracy was not up to the mark.

Kaggle Dataset Link: https://www.kaggle.com/datasets/ananthu017/emotion-detection-fer/code?resource=download

Existing notebook link: https://www.kaggle.com/code/aryan348/emotion-detection-cnn

I tried to increase the accuracy by fine-tuning the CNN model, and I also tried to capture emotions from video.
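The video part reduces to a per-frame classification loop. The sketch below is framework-agnostic: `predict_fn` stands in for something like `model1.predict` on a (48, 48, 1) grayscale input, frames would come from e.g. `cv2.VideoCapture` in practice, and the alphabetical `EMOTIONS` ordering mirrors what `flow_from_directory` would produce. All three are assumptions, not code from this notebook.

```python
import numpy as np

# Assumed label order (alphabetical, as Keras directory iterators would yield it).
EMOTIONS = ["angry", "disgust", "fear", "happy", "neutral", "sad", "surprise"]

def label_frames(frames, predict_fn, every_n=1):
    """Yield (frame_index, emotion_label) for every n-th frame.

    frames     : iterable of (48, 48) grayscale uint8 arrays
    predict_fn : maps a (1, 48, 48, 1) float batch to a (1, 7) probability row
    """
    for i, frame in enumerate(frames):
        if i % every_n:
            continue
        # Rescale to [0, 1] and add batch/channel axes, matching the training pipeline.
        batch = frame.astype("float32")[None, :, :, None] / 255.0
        probs = predict_fn(batch)
        yield i, EMOTIONS[int(np.argmax(probs))]

# Dummy predictor that always votes for class 3 ("happy"), just to exercise the loop.
dummy = lambda batch: np.eye(7)[[3]]
frames = [np.zeros((48, 48), dtype=np.uint8) for _ in range(4)]
print(list(label_frames(frames, dummy, every_n=2)))  # [(0, 'happy'), (2, 'happy')]
```

With a real model, `predict_fn=model1.predict` and `every_n` trades latency for coverage.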

Importing Libraries¶

In [1]:
import os
import random
import zipfile
from PIL import Image
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import matplotlib.pyplot as plt
import seaborn as sns
import tensorflow as tf
import keras
from keras.preprocessing import image
from keras.models import Sequential
from keras.layers import Conv2D, MaxPool2D, Flatten,Dense,Dropout,BatchNormalization
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.models import load_model
from tensorflow.keras import regularizers
from tensorflow.keras.preprocessing.image import load_img, img_to_array

Unzip downloaded file¶

In [2]:
# Unzip the downloaded file; the context manager closes the archive automatically
with zipfile.ZipFile("archive.zip", "r") as zip_ref:
    zip_ref.extractall()

Dataset Preparation¶

In [3]:
dataset_path = "/content/archive"

labels = os.listdir(dataset_path)
print(labels)

for label in labels:
    print(os.listdir(os.path.join(dataset_path, label)))    #listing number of folders and sub-folders in dataset
[]
In [4]:
# Input data files are available in the read-only "../input/" directory on Kaggle.
# Running this cell lists all files under the input directory.
for dirname, _, filenames in os.walk('/kaggle/input'):
    for filename in filenames:
        print(os.path.join(dirname, filename))
In [5]:
train_dir = "/content/archive/train"
test_dir = "/content/archive/test"

Preparing training data using a data generator¶

In [7]:
#preparing training data
img_size=48
train_datagen = ImageDataGenerator(      width_shift_range = 0.1,
                                         height_shift_range = 0.1,
                                         horizontal_flip = True,
                                         rescale = 1./255,
                                         validation_split = 0.2
                                        )

train_generator = train_datagen.flow_from_directory(directory = train_dir,
                                                    target_size = (img_size,img_size),
                                                    batch_size = 64,
                                                    color_mode = "grayscale",
                                                    class_mode = "categorical",
                                                    subset = "training"
                                                   )
Found 22968 images belonging to 7 classes.
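Two quick numeric checks on the generator settings above (plain NumPy/arithmetic, not generator internals): `rescale = 1./255` maps uint8 pixels into [0, 1], and `validation_split = 0.2` is why only ~80% of the 28,709 training images are found.

```python
import numpy as np

# rescale = 1./255 maps uint8 pixel values into [0, 1]
pixels = np.array([[0, 128, 255]], dtype=np.uint8)
scaled = pixels.astype("float32") * (1.0 / 255.0)
print(scaled.min(), scaled.max())  # 0.0 1.0

# validation_split = 0.2 keeps ~80% of the 28,709 training images
print(round(22968 / 28709, 3))  # 0.8
```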

Preparing validation data using a data generator¶

In [8]:
#preparing validation dataset
validation_datagen = ImageDataGenerator(rescale = 1./255,
                                         validation_split = 0.2)

validation_generator = validation_datagen.flow_from_directory( directory = test_dir,
                                                              target_size = (img_size,img_size),
                                                              batch_size = 64,
                                                              color_mode = "grayscale",
                                                              class_mode = "categorical",
                                                              subset = "validation"
                                                             )
Found 1432 images belonging to 7 classes.

Plotting an image¶

In [9]:
img = Image.open('/content/archive/test/angry/im106.png')
img.show()

Image in grayscale¶

In [10]:
img = load_img('/content/archive/test/angry/im106.png', target_size=(48, 48), color_mode = "grayscale")
img = np.array(img)
plt.imshow(img)
print(img.shape)
(48, 48)
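Note that the loaded image has shape (48, 48), while the model defined later expects (48, 48, 1) plus a batch axis. A minimal sketch of the reshape (using a zero array as a stand-in for the loaded image):

```python
import numpy as np

img = np.zeros((48, 48), dtype=np.float32)  # stand-in for the loaded grayscale image
batch = img[None, :, :, None]               # add batch and channel axes
print(batch.shape)  # (1, 48, 48, 1)
```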

Plotting a random image from the training dataset¶

In [11]:
# NOTE: this points at the archive root, so the train/ and test/ folders are
# treated as the two "classes" reported below; this dataset is only used for plotting samples.
train_ds = tf.keras.preprocessing.image_dataset_from_directory(
  directory='/content/archive',
  validation_split=0.2,
  subset="training",
  seed=123,
  image_size=(224, 224),
  batch_size=32)
Found 35887 files belonging to 2 classes.
Using 28710 files for training.
In [12]:
# Get a random sample from the training dataset
random_sample = train_ds.take(1)

# Display the images in the random sample
import matplotlib.pyplot as plt
plt.figure(figsize=(10, 10))
for images, labels in random_sample:
    for i in range(8):
        ax = plt.subplot(3, 3, i + 1)
        plt.imshow(images[i].numpy().astype("uint8"))
        plt.axis("off")

Creating and Compiling the Model¶

In [13]:
from tensorflow.keras.optimizers import Adam,RMSprop,SGD,Adamax
model1= tf.keras.models.Sequential()
model1.add(Conv2D(32, kernel_size=(3, 3), padding='same', activation='relu', input_shape=(48, 48,1)))
model1.add(Conv2D(64,(3,3), padding='same', activation='relu' ))
model1.add(BatchNormalization())
model1.add(MaxPool2D(pool_size=(2, 2)))
model1.add(Dropout(0.25))

model1.add(Conv2D(128,(5,5), padding='same', activation='relu'))
model1.add(BatchNormalization())
model1.add(MaxPool2D(pool_size=(2, 2)))
model1.add(Dropout(0.25))
    
model1.add(Conv2D(512,(3,3), padding='same', activation='relu', kernel_regularizer=regularizers.l2(0.01)))
model1.add(BatchNormalization())
model1.add(MaxPool2D(pool_size=(2, 2)))
model1.add(Dropout(0.25))

model1.add(Flatten()) 
model1.add(Dense(256,activation = 'relu'))
model1.add(BatchNormalization())
model1.add(Dropout(0.25))
    
model1.add(Dense(512,activation = 'relu'))
model1.add(BatchNormalization())
model1.add(Dropout(0.25))

model1.add(Dense(7, activation='softmax'))

model1.compile(
    optimizer = Adam(learning_rate=0.0001), 
    loss='categorical_crossentropy', 
    metrics=['accuracy']
  )
In [14]:
model1.summary()
Model: "sequential"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 conv2d (Conv2D)             (None, 48, 48, 32)        320       
                                                                 
 conv2d_1 (Conv2D)           (None, 48, 48, 64)        18496     
                                                                 
 batch_normalization (BatchN  (None, 48, 48, 64)       256       
 ormalization)                                                   
                                                                 
 max_pooling2d (MaxPooling2D  (None, 24, 24, 64)       0         
 )                                                               
                                                                 
 dropout (Dropout)           (None, 24, 24, 64)        0         
                                                                 
 conv2d_2 (Conv2D)           (None, 24, 24, 128)       204928    
                                                                 
 batch_normalization_1 (Batc  (None, 24, 24, 128)      512       
 hNormalization)                                                 
                                                                 
 max_pooling2d_1 (MaxPooling  (None, 12, 12, 128)      0         
 2D)                                                             
                                                                 
 dropout_1 (Dropout)         (None, 12, 12, 128)       0         
                                                                 
 conv2d_3 (Conv2D)           (None, 12, 12, 512)       590336    
                                                                 
 batch_normalization_2 (Batc  (None, 12, 12, 512)      2048      
 hNormalization)                                                 
                                                                 
 max_pooling2d_2 (MaxPooling  (None, 6, 6, 512)        0         
 2D)                                                             
                                                                 
 dropout_2 (Dropout)         (None, 6, 6, 512)         0         
                                                                 
 flatten (Flatten)           (None, 18432)             0         
                                                                 
 dense (Dense)               (None, 256)               4718848   
                                                                 
 batch_normalization_3 (Batc  (None, 256)              1024      
 hNormalization)                                                 
                                                                 
 dropout_3 (Dropout)         (None, 256)               0         
                                                                 
 dense_1 (Dense)             (None, 512)               131584    
                                                                 
 batch_normalization_4 (Batc  (None, 512)              2048      
 hNormalization)                                                 
                                                                 
 dropout_4 (Dropout)         (None, 512)               0         
                                                                 
 dense_2 (Dense)             (None, 7)                 3591      
                                                                 
=================================================================
Total params: 5,673,991
Trainable params: 5,671,047
Non-trainable params: 2,944
_________________________________________________________________
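The Param # column above can be verified by hand: a Conv2D layer with `same` padding has (kh * kw * in_channels + 1) * filters parameters (the +1 is the bias), and a Dense layer has (inputs + 1) * units.

```python
def conv2d_params(kh, kw, in_ch, filters):
    # (kernel area * input channels + 1 bias) per filter
    return (kh * kw * in_ch + 1) * filters

def dense_params(n_in, n_out):
    return (n_in + 1) * n_out

print(conv2d_params(3, 3, 1, 32))      # 320     (first conv)
print(conv2d_params(3, 3, 32, 64))     # 18496
print(conv2d_params(5, 5, 64, 128))    # 204928
print(dense_params(6 * 6 * 512, 256))  # 4718848 (after Flatten)
```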

Fit the Model¶

In [15]:
epochs = 50
batch_size = 60   # note: unused; the generators above already fix batch_size = 64
history = model1.fit(x = train_generator, epochs = epochs, validation_data = validation_generator)
Epoch 1/50
359/359 [==============================] - 45s 84ms/step - loss: 3.0613 - accuracy: 0.2375 - val_loss: 2.7137 - val_accuracy: 0.1425
Epoch 2/50
359/359 [==============================] - 27s 74ms/step - loss: 1.9217 - accuracy: 0.3173 - val_loss: 1.9814 - val_accuracy: 0.3003
Epoch 3/50
359/359 [==============================] - 28s 77ms/step - loss: 1.8640 - accuracy: 0.3929 - val_loss: 2.3227 - val_accuracy: 0.3073
Epoch 4/50
359/359 [==============================] - 27s 74ms/step - loss: 1.7673 - accuracy: 0.4407 - val_loss: 1.7744 - val_accuracy: 0.4595
Epoch 5/50
359/359 [==============================] - 27s 75ms/step - loss: 1.7228 - accuracy: 0.4742 - val_loss: 1.5790 - val_accuracy: 0.5126
Epoch 6/50
359/359 [==============================] - 27s 74ms/step - loss: 1.6823 - accuracy: 0.4899 - val_loss: 1.6165 - val_accuracy: 0.5126
Epoch 7/50
359/359 [==============================] - 27s 76ms/step - loss: 1.6302 - accuracy: 0.5133 - val_loss: 1.7272 - val_accuracy: 0.5307
Epoch 8/50
359/359 [==============================] - 26s 74ms/step - loss: 1.6899 - accuracy: 0.5162 - val_loss: 1.7970 - val_accuracy: 0.5168
Epoch 9/50
359/359 [==============================] - 29s 81ms/step - loss: 1.6892 - accuracy: 0.5160 - val_loss: 1.8801 - val_accuracy: 0.4986
Epoch 10/50
359/359 [==============================] - 27s 74ms/step - loss: 1.6137 - accuracy: 0.5300 - val_loss: 1.5849 - val_accuracy: 0.5454
Epoch 11/50
359/359 [==============================] - 28s 77ms/step - loss: 1.5796 - accuracy: 0.5289 - val_loss: 1.4619 - val_accuracy: 0.5622
Epoch 12/50
359/359 [==============================] - 27s 74ms/step - loss: 1.5773 - accuracy: 0.5418 - val_loss: 1.6526 - val_accuracy: 0.5363
Epoch 13/50
359/359 [==============================] - 28s 77ms/step - loss: 1.5818 - accuracy: 0.5434 - val_loss: 1.5680 - val_accuracy: 0.5845
Epoch 14/50
359/359 [==============================] - 27s 75ms/step - loss: 1.5641 - accuracy: 0.5495 - val_loss: 1.5250 - val_accuracy: 0.5517
Epoch 15/50
359/359 [==============================] - 27s 75ms/step - loss: 1.5639 - accuracy: 0.5517 - val_loss: 1.5351 - val_accuracy: 0.5663
Epoch 16/50
359/359 [==============================] - 28s 77ms/step - loss: 1.5936 - accuracy: 0.5546 - val_loss: 2.0013 - val_accuracy: 0.4504
Epoch 17/50
359/359 [==============================] - 27s 75ms/step - loss: 1.6094 - accuracy: 0.5541 - val_loss: 1.5561 - val_accuracy: 0.5698
Epoch 18/50
359/359 [==============================] - 29s 80ms/step - loss: 1.5292 - accuracy: 0.5563 - val_loss: 1.4997 - val_accuracy: 0.5719
Epoch 19/50
359/359 [==============================] - 27s 75ms/step - loss: 1.5246 - accuracy: 0.5625 - val_loss: 1.5688 - val_accuracy: 0.5594
Epoch 20/50
359/359 [==============================] - 28s 77ms/step - loss: 1.5013 - accuracy: 0.5678 - val_loss: 1.5455 - val_accuracy: 0.5733
Epoch 21/50
359/359 [==============================] - 27s 74ms/step - loss: 1.5036 - accuracy: 0.5731 - val_loss: 1.4990 - val_accuracy: 0.5852
Epoch 22/50
359/359 [==============================] - 28s 77ms/step - loss: 1.5069 - accuracy: 0.5773 - val_loss: 1.4098 - val_accuracy: 0.5817
Epoch 23/50
359/359 [==============================] - 27s 75ms/step - loss: 1.4676 - accuracy: 0.5777 - val_loss: 1.4438 - val_accuracy: 0.5943
Epoch 24/50
359/359 [==============================] - 28s 77ms/step - loss: 1.5625 - accuracy: 0.5813 - val_loss: 1.4516 - val_accuracy: 0.5950
Epoch 25/50
359/359 [==============================] - 26s 74ms/step - loss: 1.5540 - accuracy: 0.5762 - val_loss: 1.5253 - val_accuracy: 0.5775
Epoch 26/50
359/359 [==============================] - 27s 76ms/step - loss: 1.5502 - accuracy: 0.5822 - val_loss: 1.5528 - val_accuracy: 0.5803
Epoch 27/50
359/359 [==============================] - 27s 74ms/step - loss: 1.5142 - accuracy: 0.5785 - val_loss: 1.5269 - val_accuracy: 0.5684
Epoch 28/50
359/359 [==============================] - 28s 77ms/step - loss: 1.5665 - accuracy: 0.5799 - val_loss: 1.5397 - val_accuracy: 0.5880
Epoch 29/50
359/359 [==============================] - 26s 74ms/step - loss: 1.4770 - accuracy: 0.5894 - val_loss: 1.4856 - val_accuracy: 0.5845
Epoch 30/50
359/359 [==============================] - 28s 77ms/step - loss: 1.4584 - accuracy: 0.5903 - val_loss: 1.5343 - val_accuracy: 0.5733
Epoch 31/50
359/359 [==============================] - 27s 75ms/step - loss: 1.5127 - accuracy: 0.5898 - val_loss: 1.4891 - val_accuracy: 0.6034
Epoch 32/50
359/359 [==============================] - 28s 77ms/step - loss: 1.4879 - accuracy: 0.5975 - val_loss: 1.5268 - val_accuracy: 0.5943
Epoch 33/50
359/359 [==============================] - 27s 76ms/step - loss: 1.4249 - accuracy: 0.5958 - val_loss: 1.3238 - val_accuracy: 0.6159
Epoch 34/50
359/359 [==============================] - 28s 77ms/step - loss: 1.3916 - accuracy: 0.5997 - val_loss: 1.4273 - val_accuracy: 0.5936
Epoch 35/50
359/359 [==============================] - 27s 74ms/step - loss: 1.3995 - accuracy: 0.5962 - val_loss: 1.3895 - val_accuracy: 0.5964
Epoch 36/50
359/359 [==============================] - 28s 77ms/step - loss: 1.4610 - accuracy: 0.6007 - val_loss: 1.4285 - val_accuracy: 0.6089
Epoch 37/50
359/359 [==============================] - 27s 74ms/step - loss: 1.3971 - accuracy: 0.6002 - val_loss: 1.4999 - val_accuracy: 0.5915
Epoch 38/50
359/359 [==============================] - 28s 77ms/step - loss: 1.4293 - accuracy: 0.6031 - val_loss: 1.4832 - val_accuracy: 0.6047
Epoch 39/50
359/359 [==============================] - 27s 74ms/step - loss: 1.3788 - accuracy: 0.6071 - val_loss: 1.4653 - val_accuracy: 0.6061
Epoch 40/50
359/359 [==============================] - 28s 77ms/step - loss: 1.3906 - accuracy: 0.6062 - val_loss: 1.4458 - val_accuracy: 0.5957
Epoch 41/50
359/359 [==============================] - 27s 74ms/step - loss: 1.3779 - accuracy: 0.6098 - val_loss: 1.3423 - val_accuracy: 0.6020
Epoch 42/50
359/359 [==============================] - 29s 81ms/step - loss: 1.3476 - accuracy: 0.6113 - val_loss: 1.3532 - val_accuracy: 0.5943
Epoch 43/50
359/359 [==============================] - 36s 102ms/step - loss: 1.3615 - accuracy: 0.6142 - val_loss: 1.3910 - val_accuracy: 0.6020
Epoch 44/50
359/359 [==============================] - 27s 74ms/step - loss: 1.3469 - accuracy: 0.6138 - val_loss: 1.5481 - val_accuracy: 0.6159
Epoch 45/50
359/359 [==============================] - 27s 76ms/step - loss: 1.4966 - accuracy: 0.6078 - val_loss: 1.3670 - val_accuracy: 0.6082
Epoch 46/50
359/359 [==============================] - 26s 74ms/step - loss: 1.4755 - accuracy: 0.6089 - val_loss: 1.4709 - val_accuracy: 0.6110
Epoch 47/50
359/359 [==============================] - 28s 77ms/step - loss: 1.3357 - accuracy: 0.6167 - val_loss: 1.3384 - val_accuracy: 0.6034
Epoch 48/50
359/359 [==============================] - 26s 73ms/step - loss: 1.3152 - accuracy: 0.6198 - val_loss: 1.3498 - val_accuracy: 0.6187
Epoch 49/50
359/359 [==============================] - 27s 76ms/step - loss: 1.3051 - accuracy: 0.6192 - val_loss: 1.3272 - val_accuracy: 0.6208
Epoch 50/50
359/359 [==============================] - 27s 76ms/step - loss: 1.3475 - accuracy: 0.6189 - val_loss: 1.3427 - val_accuracy: 0.6264
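The 359 steps per epoch in the log follow from the generator's batch size of 64 (not the unused `batch_size = 60` variable), assuming Keras' usual ceiling division over the 22,968 training images:

```python
import math

print(math.ceil(22968 / 64))  # 359 training steps per epoch
print(math.ceil(1432 / 64))   # 23 validation steps
```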
In [16]:
# get the list of all file paths in train_ds
file_paths = train_ds.file_paths

# randomly select 10 file paths
random_files = random.sample(file_paths, 10)

# load the images corresponding to the random file paths
images = []
for file_path in random_files:
    img = tf.keras.preprocessing.image.load_img(file_path, target_size=(224, 224))
    img_array = tf.keras.preprocessing.image.img_to_array(img)
    images.append(img_array)

Evaluating the Model by Plotting Accuracy and Loss Curves¶

In [17]:
fig , ax = plt.subplots(1,2)
train_acc = history.history['accuracy']
train_loss = history.history['loss']
fig.set_size_inches(12,4)

ax[0].plot(history.history['accuracy'])
ax[0].plot(history.history['val_accuracy'])
ax[0].set_title('Training Accuracy vs Validation Accuracy')
ax[0].set_ylabel('Accuracy')
ax[0].set_xlabel('Epoch')
ax[0].legend(['Train', 'Validation'], loc='upper left')

ax[1].plot(history.history['loss'])
ax[1].plot(history.history['val_loss'])
ax[1].set_title('Training Loss vs Validation Loss')
ax[1].set_ylabel('Loss')
ax[1].set_xlabel('Epoch')
ax[1].legend(['Train', 'Validation'], loc='upper left')

plt.show()
In [18]:
fig1 = plt.gcf()
plt.plot(history.history['accuracy'])
plt.axis(ymin=0.4,ymax=1)
plt.grid()
plt.title('Model Accuracy')
plt.ylabel('Accuracy')
plt.xlabel('Epochs')
plt.legend(['train'])
plt.show()

Save the model for use as a pre-trained model¶

In [23]:
model1.save('model_1.h5')
model1.save_weights('model_weights.h5')
In [25]:
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.layers import Conv2D, MaxPool2D, Dropout, BatchNormalization, Flatten, Dense
from tensorflow.keras import regularizers
from tensorflow.keras.preprocessing.image import ImageDataGenerator

Original Content¶

Model 2¶

Here we have changed the learning rate, number of epochs, batch size, and number of filters¶

In [26]:
model_2 = tf.keras.models.Sequential([
    Conv2D(32, kernel_size=(3, 3), padding='same', activation='relu', input_shape=(48, 48, 1)),
    Conv2D(64, (3, 3), padding='same', activation='relu'),
    BatchNormalization(),
    MaxPool2D(pool_size=(2, 2)),
    Dropout(0.3),

    Conv2D(128, (5, 5), padding='same', activation='relu'),
    BatchNormalization(),
    MaxPool2D(pool_size=(2, 2)),
    Dropout(0.4),

    Conv2D(256, (3, 3), padding='same', activation='relu', kernel_regularizer=regularizers.l2(0.01)),
    BatchNormalization(),
    MaxPool2D(pool_size=(2, 2)),
    Dropout(0.5),

    Flatten(),
    Dense(256, activation='relu'),
    BatchNormalization(),
    Dropout(0.5),

    Dense(128, activation='relu'),
    BatchNormalization(),
    Dropout(0.5),

    Dense(7, activation='softmax')
])
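For reference, the concrete differences between model1 and model_2, read off the two definition cells (a plain summary dict, not generated by Keras):

```python
# (model1 value, model_2 value) for each changed hyperparameter
changes = {
    "learning_rate": (0.0001, 0.001),
    "epochs": (50, 5),
    "third_conv_filters": (512, 256),
    "second_dense_units": (512, 128),
    "dropout_rates": ((0.25, 0.25, 0.25), (0.3, 0.4, 0.5)),
}
for name, (old, new) in changes.items():
    print("%s: %s -> %s" % (name, old, new))
```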
In [27]:
optimizer = Adam(learning_rate=0.001)
model_2.compile(optimizer=optimizer, loss='categorical_crossentropy', metrics=['accuracy'])
In [28]:
train_datagen = ImageDataGenerator(
    rescale=1./255,
    rotation_range=10,
    zoom_range=0.1,
    horizontal_flip=True
)

validation_datagen = ImageDataGenerator(
    rescale=1./255
)
In [31]:
train_generator = train_datagen.flow_from_directory(
    '/content/archive/train',
    target_size=(48, 48),
    batch_size=64,
    color_mode='grayscale',
    class_mode='categorical'
)

validation_generator = validation_datagen.flow_from_directory(
    '/content/archive/test',
    target_size=(48, 48),
    batch_size=64,
    color_mode='grayscale',
    class_mode='categorical'
)

history_2 = model_2.fit(
    train_generator,
    steps_per_epoch=train_generator.samples // train_generator.batch_size,
    epochs=5,
    validation_data=validation_generator,
    validation_steps=validation_generator.samples // validation_generator.batch_size,
    verbose=1
)
Found 28709 images belonging to 7 classes.
Found 7178 images belonging to 7 classes.
Epoch 1/5
448/448 [==============================] - 37s 83ms/step - loss: 2.2919 - accuracy: 0.2979 - val_loss: 2.3117 - val_accuracy: 0.3217
Epoch 2/5
448/448 [==============================] - 37s 82ms/step - loss: 1.7883 - accuracy: 0.3959 - val_loss: 1.7279 - val_accuracy: 0.4071
Epoch 3/5
448/448 [==============================] - 35s 79ms/step - loss: 1.7023 - accuracy: 0.4387 - val_loss: 1.7077 - val_accuracy: 0.4280
Epoch 4/5
448/448 [==============================] - 36s 80ms/step - loss: 1.6777 - accuracy: 0.4614 - val_loss: 1.6121 - val_accuracy: 0.4882
Epoch 5/5
448/448 [==============================] - 36s 79ms/step - loss: 1.6506 - accuracy: 0.4749 - val_loss: 1.5657 - val_accuracy: 0.5104
In [ ]:
fig1 = plt.gcf()
plt.plot(history_2.history['accuracy'])
plt.axis(ymin=0.4,ymax=1)
plt.grid()
plt.title('Model Accuracy')
plt.ylabel('Accuracy')
plt.xlabel('Epochs')
plt.legend(['train'])
plt.show()
In [ ]:
fig , ax = plt.subplots(1,2)
train_acc = history_2.history['accuracy']
train_loss = history_2.history['loss']
fig.set_size_inches(12,4)

ax[0].plot(history_2.history['accuracy'])
ax[0].plot(history_2.history['val_accuracy'])
ax[0].set_title('Training Accuracy vs Validation Accuracy')
ax[0].set_ylabel('Accuracy')
ax[0].set_xlabel('Epoch')
ax[0].legend(['Train', 'Validation'], loc='upper left')

ax[1].plot(history_2.history['loss'])
ax[1].plot(history_2.history['val_loss'])
ax[1].set_title('Training Loss vs Validation Loss')
ax[1].set_ylabel('Loss')
ax[1].set_xlabel('Epoch')
ax[1].legend(['Train', 'Validation'], loc='upper left')

plt.show()

It is clear from the accuracy and loss curves that I am not getting the accuracy I am looking for.

In [ ]:
# Evaluate the model on the training data (for comparison with the test set below)
train_loss_eval, train_acc_eval = model_2.evaluate(train_generator)
print('Training accuracy:', train_acc_eval)
449/449 [==============================] - 22s 48ms/step - loss: 1.4520 - accuracy: 0.6213
Training accuracy: 0.6212685704231262
In [ ]:
# Evaluate the model on the test data
test_loss, test_acc = model_2.evaluate(validation_generator)
print('Test accuracy:', test_acc)
113/113 [==============================] - 3s 29ms/step - loss: 1.5065 - accuracy: 0.6039
Test accuracy: 0.6039286851882935
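The two evaluations above give a rough overfitting check: the gap between accuracy on the training generator and accuracy on the test generator.

```python
train_acc = 0.6212685704231262  # evaluation on the training generator
test_acc = 0.6039286851882935   # evaluation on the test generator
gap = train_acc - test_acc
print("generalization gap: %.4f" % gap)  # 0.0173
```

A gap under two percentage points suggests the model is underfitting rather than overfitting, which matches the flat accuracy curves.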

Here I have applied a model with more complex layers and changed the activation function to Swish in some of the layers¶

In [ ]:
model_3 = tf.keras.models.Sequential([
    Conv2D(32, kernel_size=(3, 3), padding='same', activation='swish', input_shape=(48, 48, 1)),
    Conv2D(64, (3, 3), padding='same', activation='relu'),
    BatchNormalization(),
    MaxPool2D(pool_size=(2, 2)),
    Dropout(0.3),

    Conv2D(128, (5, 5), padding='same', activation='swish'),
    BatchNormalization(),
    MaxPool2D(pool_size=(2, 2)),
    Dropout(0.4),

    Conv2D(512, (3, 3), padding='same', activation='swish', kernel_regularizer=regularizers.l2(0.01)),
    BatchNormalization(),
    MaxPool2D(pool_size=(2, 2)),
    Dropout(0.5),

    Flatten(),
    Dense(256, activation='swish'),
    BatchNormalization(),
    Dropout(0.5),

    Dense(128, activation='swish'),
    BatchNormalization(),
    Dropout(0.5),

    Dense(7, activation='softmax')
])

model_3.summary()
Model: "sequential_8"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 conv2d_28 (Conv2D)          (None, 48, 48, 32)        320       
                                                                 
 conv2d_29 (Conv2D)          (None, 48, 48, 64)        18496     
                                                                 
 batch_normalization_29 (Bat  (None, 48, 48, 64)       256       
 chNormalization)                                                
                                                                 
 max_pooling2d_20 (MaxPoolin  (None, 24, 24, 64)       0         
 g2D)                                                            
                                                                 
 dropout_29 (Dropout)        (None, 24, 24, 64)        0         
                                                                 
 conv2d_30 (Conv2D)          (None, 24, 24, 128)       204928    
                                                                 
 batch_normalization_30 (Bat  (None, 24, 24, 128)      512       
 chNormalization)                                                
                                                                 
 max_pooling2d_21 (MaxPoolin  (None, 12, 12, 128)      0         
 g2D)                                                            
                                                                 
 dropout_30 (Dropout)        (None, 12, 12, 128)       0         
                                                                 
 conv2d_31 (Conv2D)          (None, 12, 12, 512)       590336    
                                                                 
 batch_normalization_31 (Bat  (None, 12, 12, 512)      2048      
 chNormalization)                                                
                                                                 
 max_pooling2d_22 (MaxPoolin  (None, 6, 6, 512)        0         
 g2D)                                                            
                                                                 
 dropout_31 (Dropout)        (None, 6, 6, 512)         0         
                                                                 
 flatten_7 (Flatten)         (None, 18432)             0         
                                                                 
 dense_20 (Dense)            (None, 256)               4718848   
                                                                 
 batch_normalization_32 (Bat  (None, 256)              1024      
 chNormalization)                                                
                                                                 
 dropout_32 (Dropout)        (None, 256)               0         
                                                                 
 dense_21 (Dense)            (None, 128)               32896     
                                                                 
 batch_normalization_33 (Bat  (None, 128)              512       
 chNormalization)                                                
                                                                 
 dropout_33 (Dropout)        (None, 128)               0         
                                                                 
 dense_22 (Dense)            (None, 7)                 903       
                                                                 
=================================================================
Total params: 5,571,079
Trainable params: 5,568,903
Non-trainable params: 2,176
_________________________________________________________________
In [ ]:
optimizer = Adam(learning_rate=0.001)
model_3.compile(optimizer=optimizer, loss='categorical_crossentropy', metrics=['accuracy'])
In [ ]:
epochs = 60
batch_size = 50   # note: unused; the generators already fix batch_size = 64
history_3 = model_3.fit(x = train_generator, epochs = epochs, validation_data = validation_generator)
Epoch 1/60
449/449 [==============================] - 31s 69ms/step - loss: 2.0706 - accuracy: 0.3176 - val_loss: 1.9007 - val_accuracy: 0.3129
Epoch 2/60
449/449 [==============================] - 30s 67ms/step - loss: 1.8046 - accuracy: 0.4144 - val_loss: 1.7169 - val_accuracy: 0.4666
Epoch 3/60
449/449 [==============================] - 30s 67ms/step - loss: 1.7604 - accuracy: 0.4599 - val_loss: 1.6564 - val_accuracy: 0.5014
Epoch 4/60
449/449 [==============================] - 30s 67ms/step - loss: 1.7344 - accuracy: 0.4794 - val_loss: 1.6645 - val_accuracy: 0.5127
Epoch 5/60
449/449 [==============================] - 29s 64ms/step - loss: 1.7379 - accuracy: 0.4930 - val_loss: 1.6922 - val_accuracy: 0.5230
Epoch 6/60
449/449 [==============================] - 30s 67ms/step - loss: 1.7140 - accuracy: 0.4994 - val_loss: 1.6086 - val_accuracy: 0.5371
Epoch 7/60
449/449 [==============================] - 68s 151ms/step - loss: 1.6905 - accuracy: 0.5120 - val_loss: 1.6370 - val_accuracy: 0.5446
Epoch 8/60
449/449 [==============================] - 31s 68ms/step - loss: 1.7093 - accuracy: 0.5126 - val_loss: 1.6072 - val_accuracy: 0.5443
Epoch 9/60
449/449 [==============================] - 30s 66ms/step - loss: 1.6886 - accuracy: 0.5219 - val_loss: 1.5672 - val_accuracy: 0.5497
Epoch 10/60
449/449 [==============================] - 29s 64ms/step - loss: 1.6747 - accuracy: 0.5257 - val_loss: 1.5592 - val_accuracy: 0.5556
Epoch 11/60
449/449 [==============================] - 33s 73ms/step - loss: 1.6697 - accuracy: 0.5303 - val_loss: 1.6010 - val_accuracy: 0.5553
Epoch 12/60
449/449 [==============================] - 29s 65ms/step - loss: 1.6636 - accuracy: 0.5350 - val_loss: 1.5484 - val_accuracy: 0.5676
Epoch 13/60
449/449 [==============================] - 32s 72ms/step - loss: 1.6989 - accuracy: 0.5343 - val_loss: 1.6207 - val_accuracy: 0.5680
Epoch 14/60
449/449 [==============================] - 29s 64ms/step - loss: 1.6518 - accuracy: 0.5336 - val_loss: 1.5602 - val_accuracy: 0.5645
Epoch 15/60
449/449 [==============================] - 30s 66ms/step - loss: 1.6394 - accuracy: 0.5394 - val_loss: 1.5505 - val_accuracy: 0.5645
Epoch 16/60
449/449 [==============================] - 31s 68ms/step - loss: 1.6476 - accuracy: 0.5398 - val_loss: 1.6532 - val_accuracy: 0.5454
Epoch 17/60
449/449 [==============================] - 30s 66ms/step - loss: 1.6340 - accuracy: 0.5469 - val_loss: 1.5407 - val_accuracy: 0.5702
Epoch 18/60
449/449 [==============================] - 29s 64ms/step - loss: 1.6505 - accuracy: 0.5475 - val_loss: 1.5669 - val_accuracy: 0.5811
Epoch 19/60
449/449 [==============================] - 31s 68ms/step - loss: 1.7408 - accuracy: 0.5437 - val_loss: 1.7698 - val_accuracy: 0.5617
Epoch 20/60
449/449 [==============================] - 29s 65ms/step - loss: 1.7147 - accuracy: 0.5476 - val_loss: 1.5556 - val_accuracy: 0.5780
Epoch 21/60
449/449 [==============================] - 30s 67ms/step - loss: 1.6709 - accuracy: 0.5499 - val_loss: 1.5499 - val_accuracy: 0.5765
Epoch 22/60
449/449 [==============================] - 29s 65ms/step - loss: 1.6733 - accuracy: 0.5534 - val_loss: 1.5583 - val_accuracy: 0.5841
Epoch 23/60
449/449 [==============================] - 29s 65ms/step - loss: 1.6630 - accuracy: 0.5541 - val_loss: 1.5890 - val_accuracy: 0.5756
Epoch 24/60
449/449 [==============================] - 30s 66ms/step - loss: 1.6690 - accuracy: 0.5566 - val_loss: 1.6052 - val_accuracy: 0.5626
Epoch 25/60
449/449 [==============================] - 30s 67ms/step - loss: 1.6221 - accuracy: 0.5608 - val_loss: 1.5023 - val_accuracy: 0.5807
Epoch 26/60
449/449 [==============================] - 31s 70ms/step - loss: 1.6789 - accuracy: 0.5610 - val_loss: 1.6842 - val_accuracy: 0.5770
Epoch 27/60
449/449 [==============================] - 29s 65ms/step - loss: 1.6757 - accuracy: 0.5613 - val_loss: 1.6325 - val_accuracy: 0.5649
Epoch 28/60
449/449 [==============================] - 30s 67ms/step - loss: 1.5996 - accuracy: 0.5633 - val_loss: 1.5822 - val_accuracy: 0.5656
Epoch 29/60
449/449 [==============================] - 29s 65ms/step - loss: 1.6102 - accuracy: 0.5656 - val_loss: 1.4973 - val_accuracy: 0.5887
Epoch 30/60
449/449 [==============================] - 31s 69ms/step - loss: 1.6253 - accuracy: 0.5671 - val_loss: 1.5022 - val_accuracy: 0.5935
Epoch 31/60
449/449 [==============================] - 29s 64ms/step - loss: 1.5886 - accuracy: 0.5708 - val_loss: 1.6365 - val_accuracy: 0.5670
Epoch 32/60
449/449 [==============================] - 31s 70ms/step - loss: 1.6985 - accuracy: 0.5613 - val_loss: 1.6364 - val_accuracy: 0.5978
Epoch 33/60
449/449 [==============================] - 29s 64ms/step - loss: 1.6741 - accuracy: 0.5651 - val_loss: 1.5484 - val_accuracy: 0.5913
Epoch 34/60
449/449 [==============================] - 30s 66ms/step - loss: 1.9269 - accuracy: 0.5583 - val_loss: 1.9458 - val_accuracy: 0.5637
Epoch 35/60
449/449 [==============================] - 32s 71ms/step - loss: 1.7627 - accuracy: 0.5643 - val_loss: 1.5818 - val_accuracy: 0.5925
Epoch 36/60
449/449 [==============================] - 30s 67ms/step - loss: 1.6701 - accuracy: 0.5699 - val_loss: 1.8428 - val_accuracy: 0.5854
Epoch 37/60
449/449 [==============================] - 29s 66ms/step - loss: 1.6115 - accuracy: 0.5780 - val_loss: 1.5065 - val_accuracy: 0.5872
Epoch 38/60
449/449 [==============================] - 30s 66ms/step - loss: 1.6093 - accuracy: 0.5784 - val_loss: 1.5191 - val_accuracy: 0.5929
Epoch 39/60
449/449 [==============================] - 30s 66ms/step - loss: 1.6455 - accuracy: 0.5746 - val_loss: 1.5397 - val_accuracy: 0.5977
Epoch 40/60
449/449 [==============================] - 30s 67ms/step - loss: 1.6010 - accuracy: 0.5761 - val_loss: 1.4837 - val_accuracy: 0.5972
Epoch 41/60
449/449 [==============================] - 30s 67ms/step - loss: 1.5637 - accuracy: 0.5778 - val_loss: 1.4899 - val_accuracy: 0.5982
Epoch 42/60
449/449 [==============================] - 29s 65ms/step - loss: 1.5735 - accuracy: 0.5749 - val_loss: 1.4818 - val_accuracy: 0.5968
Epoch 43/60
449/449 [==============================] - 30s 67ms/step - loss: 1.5892 - accuracy: 0.5793 - val_loss: 1.5609 - val_accuracy: 0.6031
Epoch 44/60
449/449 [==============================] - 30s 67ms/step - loss: 1.6049 - accuracy: 0.5794 - val_loss: 1.5816 - val_accuracy: 0.5756
Epoch 45/60
449/449 [==============================] - 29s 64ms/step - loss: 1.6448 - accuracy: 0.5763 - val_loss: 1.5766 - val_accuracy: 0.5908
Epoch 46/60
449/449 [==============================] - 30s 66ms/step - loss: 1.5680 - accuracy: 0.5793 - val_loss: 1.4561 - val_accuracy: 0.6017
Epoch 47/60
449/449 [==============================] - 31s 68ms/step - loss: 1.6357 - accuracy: 0.5794 - val_loss: 1.5130 - val_accuracy: 0.5938
Epoch 48/60
449/449 [==============================] - 30s 66ms/step - loss: 1.5624 - accuracy: 0.5872 - val_loss: 1.5598 - val_accuracy: 0.6048
Epoch 49/60
449/449 [==============================] - 29s 65ms/step - loss: 1.6062 - accuracy: 0.5816 - val_loss: 1.6514 - val_accuracy: 0.6028
Epoch 50/60
449/449 [==============================] - 31s 68ms/step - loss: 1.6088 - accuracy: 0.5845 - val_loss: 1.5405 - val_accuracy: 0.6021
Epoch 51/60
449/449 [==============================] - 29s 64ms/step - loss: 1.5924 - accuracy: 0.5809 - val_loss: 1.5389 - val_accuracy: 0.5963
Epoch 52/60
449/449 [==============================] - 30s 66ms/step - loss: 1.5493 - accuracy: 0.5858 - val_loss: 1.5425 - val_accuracy: 0.5758
Epoch 53/60
449/449 [==============================] - 32s 71ms/step - loss: 1.5712 - accuracy: 0.5859 - val_loss: 1.4967 - val_accuracy: 0.5993
Epoch 54/60
449/449 [==============================] - 29s 64ms/step - loss: 1.5721 - accuracy: 0.5874 - val_loss: 1.4550 - val_accuracy: 0.6027
Epoch 55/60
449/449 [==============================] - 32s 70ms/step - loss: 1.5841 - accuracy: 0.5873 - val_loss: 1.6010 - val_accuracy: 0.6119
Epoch 56/60
449/449 [==============================] - 29s 64ms/step - loss: 1.6508 - accuracy: 0.5844 - val_loss: 1.5471 - val_accuracy: 0.6046
Epoch 57/60
449/449 [==============================] - 30s 67ms/step - loss: 1.5733 - accuracy: 0.5887 - val_loss: 1.5251 - val_accuracy: 0.5890
Epoch 58/60
449/449 [==============================] - 32s 71ms/step - loss: 1.6055 - accuracy: 0.5914 - val_loss: 1.4596 - val_accuracy: 0.5986
Epoch 59/60
449/449 [==============================] - 30s 67ms/step - loss: 1.5428 - accuracy: 0.5908 - val_loss: 1.4837 - val_accuracy: 0.6031
Epoch 60/60
449/449 [==============================] - 33s 73ms/step - loss: 1.5783 - accuracy: 0.5902 - val_loss: 1.5603 - val_accuracy: 0.5995
In [ ]:
fig2 = plt.gcf()
plt.plot(history_3.history['accuracy'])
plt.axis(ymin=0.4,ymax=1)
plt.grid()
plt.title('Model Accuracy')
plt.ylabel('Accuracy')
plt.xlabel('Epochs')
plt.legend(['train'])
plt.show()
In [ ]:
fig2 , ax = plt.subplots(1,2)
train_acc = history_3.history['accuracy']
train_loss = history_3.history['loss']
fig2.set_size_inches(12,4)

ax[0].plot(history_3.history['accuracy'])
ax[0].plot(history_3.history['val_accuracy'])
ax[0].set_title('Training Accuracy vs Validation Accuracy')
ax[0].set_ylabel('Accuracy')
ax[0].set_xlabel('Epoch')
ax[0].legend(['Train', 'Validation'], loc='upper left')

ax[1].plot(history_3.history['loss'])
ax[1].plot(history_3.history['val_loss'])
ax[1].set_title('Training Loss vs Validation Loss')
ax[1].set_ylabel('Loss')
ax[1].set_xlabel('Epoch')
ax[1].legend(['Train', 'Validation'], loc='upper left')

plt.show()

The validation accuracy is still stuck around 60%, so I am not satisfied with this result.¶

In [ ]:
# Evaluate the model on the training data (note: this passes train_generator, not a held-out test set)
test_loss, test_acc = model_3.evaluate(train_generator)
print('Test accuracy:', test_acc)
449/449 [==============================] - 22s 50ms/step - loss: 1.4892 - accuracy: 0.6271
Test accuracy: 0.6271204352378845
In [ ]:
# Evaluate the model on the validation data
test_loss, test_acc = model_3.evaluate(validation_generator)
print('Test accuracy:', test_acc)
113/113 [==============================] - 3s 23ms/step - loss: 1.5603 - accuracy: 0.5995
Test accuracy: 0.5994706153869629
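Overall accuracy hides which emotions the model confuses with each other. As an illustrative sketch (the helper below is not from the notebook), per-class accuracy can be computed from predicted and true label arrays; in the real notebook those would come from `np.argmax(model_3.predict(validation_generator), axis=1)` and `validation_generator.classes` (with shuffling disabled on the generator).

```python
import numpy as np

def per_class_accuracy(y_true, y_pred, n_classes=7):
    """Return {class index: accuracy on that class} from label arrays."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    acc = {}
    for c in range(n_classes):
        mask = y_true == c
        acc[c] = float((y_pred[mask] == c).mean()) if mask.any() else float('nan')
    return acc

# Tiny synthetic check: class 0 fully correct, class 1 half correct
print(per_class_accuracy([0, 0, 1, 1], [0, 0, 1, 2], n_classes=3))
```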

Next, I apply data augmentation to effectively increase the size of the training data¶

In [ ]:
# Extracting the dataset
zip_ref = zipfile.ZipFile("archive.zip", "r")
zip_ref.extractall()
zip_ref.close()

dataset_path = "/content/archive"

labels = os.listdir(dataset_path)
print(labels)

for label in labels:
    print(os.listdir(os.path.join(dataset_path, label)))  

# Preparing training and validation directories
train_dir = "/content/archive/train"
test_dir = "/content/archive/test"

# Defining ImageDataGenerator for data augmentation
img_size = 48
train_datagen = ImageDataGenerator(
    rescale = 1./255,
    rotation_range = 20,
    zoom_range = 0.2,
    width_shift_range = 0.1,
    height_shift_range = 0.1,
    shear_range = 0.2,
    horizontal_flip = True,
    vertical_flip = True,   # note: vertical flips are unusual for faces, since upside-down faces rarely occur
    fill_mode = 'nearest',
    validation_split = 0.2
)

# Preparing training data using ImageDataGenerator
train_generator = train_datagen.flow_from_directory(
    directory = train_dir,
    target_size = (img_size, img_size),
    batch_size = 64,
    color_mode = "grayscale",
    class_mode = "categorical",
    subset = "training"
)

# Preparing validation data using ImageDataGenerator
validation_datagen = ImageDataGenerator(
    rescale = 1./255,
    validation_split = 0.2
)

# Note: subset = "validation" keeps only 20% of the test folder
# (1,432 of the 7,178 test images), so this is not the full test set
validation_generator = validation_datagen.flow_from_directory(
    directory = test_dir,
    target_size = (img_size, img_size),
    batch_size = 64,
    color_mode = "grayscale",
    class_mode = "categorical",
    subset = "validation"
)

# Loading an image for visualization
img = Image.open('/content/archive/test/angry/im106.png')
img.show()

img = load_img('/content/archive/test/angry/im106.png', target_size=(48, 48), color_mode="grayscale")
img = np.array(img)
plt.imshow(img)
print(img.shape)

# Preparing training data using tf.keras.preprocessing.image_dataset_from_directory
# Caution: pointing this at dataset_path treats the top-level 'train' and 'test'
# folders as the two classes (see the "belonging to 2 classes" output below);
# point it at train_dir instead to pick up the seven emotion folders.
train_ds = tf.keras.preprocessing.image_dataset_from_directory(
    directory = dataset_path,
    validation_split = 0.2,
    subset = "training",
    seed = 123,
    image_size = (224, 224),
    batch_size = 32
)
['test', 'train']
['fearful', 'disgusted', 'angry', 'neutral', 'surprised', 'sad', 'happy']
['fearful', 'disgusted', 'angry', 'neutral', 'surprised', 'sad', 'happy']
Found 22968 images belonging to 7 classes.
Found 1432 images belonging to 7 classes.
(48, 48)
Found 35887 files belonging to 2 classes.
Using 28710 files for training.
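To get a feel for what the augmentation above does to each training image, here is a minimal NumPy-only sketch of two of the configured transforms (horizontal flip and width shift). It is illustrative only; the real `ImageDataGenerator` applies all of the configured transforms with proper `fill_mode` edge handling rather than wrapping.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(img, max_shift=4):
    """Apply a random horizontal flip and width shift to a 2-D image,
    mimicking a small subset of what ImageDataGenerator does each epoch."""
    out = img.copy()
    if rng.random() < 0.5:
        out = out[:, ::-1]                     # horizontal_flip = True
    dx = int(rng.integers(-max_shift, max_shift + 1))
    out = np.roll(out, dx, axis=1)             # width_shift_range (wraps instead of fill_mode)
    return out

img = np.arange(48 * 48, dtype=np.float32).reshape(48, 48) / (48 * 48)
aug = augment(img)
print(img.shape, aug.shape)
```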
In [ ]:
from tensorflow.keras.preprocessing.image import load_img, img_to_array

img = load_img("../content/archive/test/fearful/im1002.png",target_size = (48,48),color_mode = "grayscale")
img = np.array(img)
plt.imshow(img)
print(img.shape)

import cv2
import numpy as np

label_dict = {0:'Angry',1:'Disgust',2:'Fear',3:'Happy',4:'Neutral',5:'Sad',6:'Surprise'}

img = cv2.imread('/content/archive/test/fearful/im1002.png', cv2.IMREAD_GRAYSCALE)

if img is None:
    raise ValueError('Failed to load image. Check if image path is correct and image file exists.')
elif img.size == 0:
    raise ValueError('Input image is empty or has invalid dimensions. Check the image file.')

img = cv2.resize(img, (227, 227))
img = np.repeat(img[..., np.newaxis], 3, -1)

result = model.predict(np.array([img]))
result = list(result[0])
print(result)

img_index = result.index(max(result))
print(label_dict[img_index])

plt.imshow(img)
plt.show()
(48, 48)
1/1 [==============================] - 0s 22ms/step
[0.0, 0.0, 9.047387e-16, 0.0, 1.0, 0.0, 0.0]
Neutral
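One thing worth double-checking in the prediction cell above: the hand-written `label_dict` happens to match the alphabetical folder order that `flow_from_directory` uses, but it is safer to derive the map from the generator's `class_indices` attribute so the labels can never drift out of sync. A sketch (the dict literal below stands in for what the generator reports):

```python
# flow_from_directory assigns class indices alphabetically by folder name,
# e.g. train_generator.class_indices == {'angry': 0, 'disgusted': 1, ...}.
# Inverting that dict gives a label map that cannot drift out of sync.
class_indices = {'angry': 0, 'disgusted': 1, 'fearful': 2, 'happy': 3,
                 'neutral': 4, 'sad': 5, 'surprised': 6}  # as reported by the generator
label_dict = {v: k for k, v in class_indices.items()}
print(label_dict[2])  # -> fearful
```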
In [ ]:
# Creating and compiling the model
from tensorflow.keras.optimizers import Adam, RMSprop, SGD, Adamax

model_4 = Sequential()
model_4.add(Conv2D(32, kernel_size = (3, 3), padding = 'same', activation = 'relu', input_shape = (48, 48, 1)))
model_4.add(Conv2D(64, (3, 3), padding = 'same', activation = 'relu'))
model_4.add(BatchNormalization())
model_4.add(MaxPool2D(pool_size = (2, 2)))
model_4.add(Dropout(0.25))

model_4.add(Conv2D(128, (5, 5), padding = 'same', activation = 'relu'))
model_4.add(Conv2D(256, (5, 5), padding = 'same', activation = 'relu'))
model_4.add(Conv2D(512, (5, 5), padding = 'same', activation = 'swish'))
model_4.add(BatchNormalization())
model_4.add(MaxPool2D(pool_size = (2, 2)))
model_4.add(Dropout(0.25))

model_4.add(Flatten())
model_4.add(Dense(256, activation='relu'))
model_4.add(BatchNormalization())
model_4.add(Dropout(0.5))

model_4.add(Dense(128, activation = 'relu'))
model_4.add(BatchNormalization())
model_4.add(Dropout(0.5))

model_4.add(Dense(7, activation = 'softmax'))

model_4.summary()
Model: "sequential_15"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 conv2d_60 (Conv2D)          (None, 48, 48, 32)        320       
                                                                 
 conv2d_61 (Conv2D)          (None, 48, 48, 64)        18496     
                                                                 
 batch_normalization_56 (Bat  (None, 48, 48, 64)       256       
 chNormalization)                                                
                                                                 
 max_pooling2d_35 (MaxPoolin  (None, 24, 24, 64)       0         
 g2D)                                                            
                                                                 
 dropout_56 (Dropout)        (None, 24, 24, 64)        0         
                                                                 
 conv2d_62 (Conv2D)          (None, 24, 24, 128)       204928    
                                                                 
 conv2d_63 (Conv2D)          (None, 24, 24, 256)       819456    
                                                                 
 conv2d_64 (Conv2D)          (None, 24, 24, 512)       3277312   
                                                                 
 batch_normalization_57 (Bat  (None, 24, 24, 512)      2048      
 chNormalization)                                                
                                                                 
 max_pooling2d_36 (MaxPoolin  (None, 12, 12, 512)      0         
 g2D)                                                            
                                                                 
 dropout_57 (Dropout)        (None, 12, 12, 512)       0         
                                                                 
 flatten_14 (Flatten)        (None, 73728)             0         
                                                                 
 dense_39 (Dense)            (None, 256)               18874624  
                                                                 
 batch_normalization_58 (Bat  (None, 256)              1024      
 chNormalization)                                                
                                                                 
 dropout_58 (Dropout)        (None, 256)               0         
                                                                 
 dense_40 (Dense)            (None, 128)               32896     
                                                                 
 batch_normalization_59 (Bat  (None, 128)              512       
 chNormalization)                                                
                                                                 
 dropout_59 (Dropout)        (None, 128)               0         
                                                                 
 dense_41 (Dense)            (None, 7)                 903       
                                                                 
=================================================================
Total params: 23,232,775
Trainable params: 23,230,855
Non-trainable params: 1,920
_________________________________________________________________
In [ ]:
optimizer = Adam(learning_rate=0.001)  # `lr` is deprecated in favour of `learning_rate`
model_4.compile(optimizer = optimizer, loss = 'categorical_crossentropy', metrics = ['accuracy'])

Here I am going to train for 100 epochs (the batch size of 64 is set on the generators)¶

In [ ]:
epochs = 100
# The batch size (64) is set on the generators above, so it is not passed to fit()
history_4 = model_4.fit(x = train_generator, epochs = epochs, validation_data = validation_generator)
Epoch 1/100
359/359 [==============================] - 77s 94ms/step - loss: 2.2728 - accuracy: 0.1957 - val_loss: 1.8245 - val_accuracy: 0.2465
Epoch 2/100
359/359 [==============================] - 34s 96ms/step - loss: 1.8999 - accuracy: 0.2323 - val_loss: 1.7758 - val_accuracy: 0.2689
Epoch 3/100
359/359 [==============================] - 34s 95ms/step - loss: 1.8164 - accuracy: 0.2522 - val_loss: 1.7802 - val_accuracy: 0.2709
Epoch 4/100
359/359 [==============================] - 33s 92ms/step - loss: 1.7824 - accuracy: 0.2668 - val_loss: 1.9309 - val_accuracy: 0.2493
Epoch 5/100
359/359 [==============================] - 35s 97ms/step - loss: 1.7334 - accuracy: 0.2985 - val_loss: 1.6452 - val_accuracy: 0.3331
Epoch 6/100
359/359 [==============================] - 34s 95ms/step - loss: 1.6903 - accuracy: 0.3207 - val_loss: 1.6060 - val_accuracy: 0.3624
Epoch 7/100
359/359 [==============================] - 34s 95ms/step - loss: 1.6508 - accuracy: 0.3440 - val_loss: 1.6774 - val_accuracy: 0.3457
Epoch 8/100
359/359 [==============================] - 34s 93ms/step - loss: 1.6055 - accuracy: 0.3670 - val_loss: 1.6336 - val_accuracy: 0.3904
Epoch 9/100
359/359 [==============================] - 33s 93ms/step - loss: 1.5296 - accuracy: 0.4047 - val_loss: 1.5944 - val_accuracy: 0.3659
Epoch 10/100
359/359 [==============================] - 33s 93ms/step - loss: 1.4910 - accuracy: 0.4252 - val_loss: 1.4015 - val_accuracy: 0.4721
Epoch 11/100
359/359 [==============================] - 33s 93ms/step - loss: 1.4289 - accuracy: 0.4460 - val_loss: 1.4617 - val_accuracy: 0.4525
Epoch 12/100
359/359 [==============================] - 33s 93ms/step - loss: 1.3832 - accuracy: 0.4688 - val_loss: 1.3259 - val_accuracy: 0.4916
Epoch 13/100
359/359 [==============================] - 33s 93ms/step - loss: 1.3484 - accuracy: 0.4868 - val_loss: 1.2273 - val_accuracy: 0.5363
Epoch 14/100
359/359 [==============================] - 33s 92ms/step - loss: 1.3200 - accuracy: 0.4969 - val_loss: 1.2162 - val_accuracy: 0.5300
Epoch 15/100
359/359 [==============================] - 33s 93ms/step - loss: 1.2920 - accuracy: 0.5057 - val_loss: 1.2331 - val_accuracy: 0.5328
Epoch 16/100
359/359 [==============================] - 33s 93ms/step - loss: 1.2611 - accuracy: 0.5206 - val_loss: 1.1751 - val_accuracy: 0.5461
Epoch 17/100
359/359 [==============================] - 33s 93ms/step - loss: 1.2468 - accuracy: 0.5269 - val_loss: 1.2542 - val_accuracy: 0.5244
Epoch 18/100
359/359 [==============================] - 33s 92ms/step - loss: 1.2401 - accuracy: 0.5282 - val_loss: 1.1372 - val_accuracy: 0.5817
Epoch 19/100
359/359 [==============================] - 33s 93ms/step - loss: 1.2172 - accuracy: 0.5351 - val_loss: 1.1293 - val_accuracy: 0.5594
Epoch 20/100
359/359 [==============================] - 33s 91ms/step - loss: 1.1996 - accuracy: 0.5475 - val_loss: 1.1713 - val_accuracy: 0.5482
Epoch 21/100
359/359 [==============================] - 33s 93ms/step - loss: 1.1905 - accuracy: 0.5515 - val_loss: 1.0890 - val_accuracy: 0.5908
Epoch 22/100
359/359 [==============================] - 33s 93ms/step - loss: 1.1770 - accuracy: 0.5549 - val_loss: 1.1273 - val_accuracy: 0.5740
Epoch 23/100
359/359 [==============================] - 33s 93ms/step - loss: 1.1650 - accuracy: 0.5599 - val_loss: 1.0763 - val_accuracy: 0.5999
Epoch 24/100
359/359 [==============================] - 33s 93ms/step - loss: 1.1462 - accuracy: 0.5687 - val_loss: 1.1067 - val_accuracy: 0.5698
Epoch 25/100
359/359 [==============================] - 33s 93ms/step - loss: 1.1454 - accuracy: 0.5691 - val_loss: 1.0720 - val_accuracy: 0.6138
Epoch 26/100
359/359 [==============================] - 33s 93ms/step - loss: 1.1379 - accuracy: 0.5730 - val_loss: 1.0705 - val_accuracy: 0.6027
Epoch 27/100
359/359 [==============================] - 33s 92ms/step - loss: 1.1240 - accuracy: 0.5764 - val_loss: 1.0827 - val_accuracy: 0.5950
Epoch 28/100
359/359 [==============================] - 33s 92ms/step - loss: 1.1210 - accuracy: 0.5775 - val_loss: 1.1011 - val_accuracy: 0.5999
Epoch 29/100
359/359 [==============================] - 34s 95ms/step - loss: 1.1022 - accuracy: 0.5853 - val_loss: 1.1103 - val_accuracy: 0.5957
Epoch 30/100
359/359 [==============================] - 33s 91ms/step - loss: 1.1021 - accuracy: 0.5845 - val_loss: 1.1183 - val_accuracy: 0.5719
Epoch 31/100
359/359 [==============================] - 33s 93ms/step - loss: 1.0856 - accuracy: 0.5921 - val_loss: 1.0873 - val_accuracy: 0.5873
Epoch 32/100
359/359 [==============================] - 33s 92ms/step - loss: 1.0877 - accuracy: 0.5904 - val_loss: 1.0712 - val_accuracy: 0.5859
Epoch 33/100
359/359 [==============================] - 33s 92ms/step - loss: 1.0835 - accuracy: 0.5949 - val_loss: 1.0829 - val_accuracy: 0.5859
Epoch 34/100
359/359 [==============================] - 34s 93ms/step - loss: 1.0744 - accuracy: 0.5981 - val_loss: 1.0483 - val_accuracy: 0.6145
Epoch 35/100
359/359 [==============================] - 34s 94ms/step - loss: 1.0687 - accuracy: 0.6016 - val_loss: 1.0257 - val_accuracy: 0.6264
Epoch 36/100
359/359 [==============================] - 33s 93ms/step - loss: 1.0642 - accuracy: 0.6004 - val_loss: 1.1158 - val_accuracy: 0.5803
Epoch 37/100
359/359 [==============================] - 33s 90ms/step - loss: 1.0520 - accuracy: 0.6088 - val_loss: 1.0566 - val_accuracy: 0.6089
Epoch 38/100
359/359 [==============================] - 33s 92ms/step - loss: 1.0487 - accuracy: 0.6111 - val_loss: 1.0712 - val_accuracy: 0.6020
Epoch 39/100
359/359 [==============================] - 33s 92ms/step - loss: 1.0463 - accuracy: 0.6071 - val_loss: 1.0440 - val_accuracy: 0.6152
Epoch 40/100
359/359 [==============================] - 33s 91ms/step - loss: 1.0410 - accuracy: 0.6087 - val_loss: 1.0140 - val_accuracy: 0.6292
Epoch 41/100
359/359 [==============================] - 33s 93ms/step - loss: 1.0245 - accuracy: 0.6179 - val_loss: 1.0499 - val_accuracy: 0.6096
Epoch 42/100
359/359 [==============================] - 33s 92ms/step - loss: 1.0228 - accuracy: 0.6156 - val_loss: 0.9943 - val_accuracy: 0.6229
Epoch 43/100
359/359 [==============================] - 33s 92ms/step - loss: 1.0216 - accuracy: 0.6193 - val_loss: 1.0383 - val_accuracy: 0.6117
Epoch 44/100
359/359 [==============================] - 33s 92ms/step - loss: 1.0173 - accuracy: 0.6203 - val_loss: 1.0328 - val_accuracy: 0.6173
Epoch 45/100
359/359 [==============================] - 33s 93ms/step - loss: 1.0163 - accuracy: 0.6237 - val_loss: 1.0508 - val_accuracy: 0.5964
Epoch 46/100
359/359 [==============================] - 33s 91ms/step - loss: 1.0081 - accuracy: 0.6235 - val_loss: 1.0523 - val_accuracy: 0.6041
Epoch 47/100
359/359 [==============================] - 34s 94ms/step - loss: 0.9984 - accuracy: 0.6279 - val_loss: 0.9930 - val_accuracy: 0.6243
Epoch 48/100
359/359 [==============================] - 33s 93ms/step - loss: 0.9895 - accuracy: 0.6320 - val_loss: 0.9713 - val_accuracy: 0.6299
Epoch 49/100
359/359 [==============================] - 33s 93ms/step - loss: 0.9965 - accuracy: 0.6311 - val_loss: 1.0213 - val_accuracy: 0.6229
Epoch 50/100
359/359 [==============================] - 33s 91ms/step - loss: 0.9887 - accuracy: 0.6296 - val_loss: 0.9923 - val_accuracy: 0.6327
Epoch 51/100
359/359 [==============================] - 33s 92ms/step - loss: 0.9910 - accuracy: 0.6330 - val_loss: 1.0475 - val_accuracy: 0.5992
Epoch 52/100
359/359 [==============================] - 33s 92ms/step - loss: 0.9788 - accuracy: 0.6352 - val_loss: 0.9752 - val_accuracy: 0.6404
Epoch 53/100
359/359 [==============================] - 33s 92ms/step - loss: 0.9831 - accuracy: 0.6357 - val_loss: 0.9781 - val_accuracy: 0.6369
Epoch 54/100
359/359 [==============================] - 34s 93ms/step - loss: 0.9719 - accuracy: 0.6380 - val_loss: 1.0128 - val_accuracy: 0.6376
Epoch 55/100
359/359 [==============================] - 34s 93ms/step - loss: 0.9723 - accuracy: 0.6366 - val_loss: 0.9688 - val_accuracy: 0.6313
Epoch 56/100
359/359 [==============================] - 33s 93ms/step - loss: 0.9619 - accuracy: 0.6408 - val_loss: 1.0088 - val_accuracy: 0.6299
Epoch 57/100
359/359 [==============================] - 32s 90ms/step - loss: 0.9641 - accuracy: 0.6425 - val_loss: 0.9681 - val_accuracy: 0.6515
Epoch 58/100
359/359 [==============================] - 33s 93ms/step - loss: 0.9560 - accuracy: 0.6458 - val_loss: 0.9988 - val_accuracy: 0.6299
Epoch 59/100
359/359 [==============================] - 34s 94ms/step - loss: 0.9515 - accuracy: 0.6453 - val_loss: 0.9812 - val_accuracy: 0.6327
Epoch 60/100
359/359 [==============================] - 33s 92ms/step - loss: 0.9526 - accuracy: 0.6462 - val_loss: 1.0278 - val_accuracy: 0.6313
Epoch 61/100
359/359 [==============================] - 33s 93ms/step - loss: 0.9419 - accuracy: 0.6520 - val_loss: 0.9908 - val_accuracy: 0.6278
Epoch 62/100
359/359 [==============================] - 33s 93ms/step - loss: 0.9371 - accuracy: 0.6555 - val_loss: 0.9662 - val_accuracy: 0.6501
Epoch 63/100
359/359 [==============================] - 33s 93ms/step - loss: 0.9324 - accuracy: 0.6532 - val_loss: 1.0020 - val_accuracy: 0.6341
Epoch 64/100
359/359 [==============================] - 33s 93ms/step - loss: 0.9394 - accuracy: 0.6506 - val_loss: 0.9748 - val_accuracy: 0.6480
Epoch 65/100
359/359 [==============================] - 33s 93ms/step - loss: 0.9320 - accuracy: 0.6560 - val_loss: 0.9920 - val_accuracy: 0.6404
Epoch 66/100
359/359 [==============================] - 33s 92ms/step - loss: 0.9277 - accuracy: 0.6570 - val_loss: 0.9544 - val_accuracy: 0.6494
Epoch 67/100
359/359 [==============================] - 33s 93ms/step - loss: 0.9212 - accuracy: 0.6590 - val_loss: 0.9581 - val_accuracy: 0.6466
Epoch 68/100
359/359 [==============================] - 33s 91ms/step - loss: 0.9175 - accuracy: 0.6609 - val_loss: 0.9665 - val_accuracy: 0.6501
Epoch 69/100
359/359 [==============================] - 33s 92ms/step - loss: 0.9178 - accuracy: 0.6617 - val_loss: 0.9769 - val_accuracy: 0.6439
Epoch 70/100
359/359 [==============================] - 33s 92ms/step - loss: 0.9162 - accuracy: 0.6627 - val_loss: 1.0420 - val_accuracy: 0.6257
Epoch 71/100
359/359 [==============================] - 32s 90ms/step - loss: 0.9133 - accuracy: 0.6617 - val_loss: 1.0044 - val_accuracy: 0.6411
Epoch 72/100
359/359 [==============================] - 33s 92ms/step - loss: 0.9019 - accuracy: 0.6674 - val_loss: 0.9707 - val_accuracy: 0.6522
Epoch 73/100
359/359 [==============================] - 33s 92ms/step - loss: 0.9048 - accuracy: 0.6658 - val_loss: 0.9438 - val_accuracy: 0.6585
Epoch 74/100
359/359 [==============================] - 33s 91ms/step - loss: 0.9013 - accuracy: 0.6696 - val_loss: 0.9875 - val_accuracy: 0.6466
Epoch 75/100
359/359 [==============================] - 33s 91ms/step - loss: 0.9047 - accuracy: 0.6672 - val_loss: 0.9871 - val_accuracy: 0.6348
Epoch 76/100
359/359 [==============================] - 33s 92ms/step - loss: 0.8995 - accuracy: 0.6675 - val_loss: 0.9610 - val_accuracy: 0.6550
Epoch 77/100
359/359 [==============================] - 33s 91ms/step - loss: 0.8941 - accuracy: 0.6731 - val_loss: 0.9839 - val_accuracy: 0.6466
Epoch 78/100
359/359 [==============================] - 32s 90ms/step - loss: 0.8934 - accuracy: 0.6739 - val_loss: 1.0237 - val_accuracy: 0.6292
Epoch 79/100
359/359 [==============================] - 33s 92ms/step - loss: 0.8921 - accuracy: 0.6734 - val_loss: 0.9636 - val_accuracy: 0.6425
Epoch 80/100
359/359 [==============================] - 33s 92ms/step - loss: 0.8839 - accuracy: 0.6736 - val_loss: 0.9745 - val_accuracy: 0.6425
Epoch 81/100
359/359 [==============================] - 33s 91ms/step - loss: 0.8804 - accuracy: 0.6723 - val_loss: 0.9712 - val_accuracy: 0.6501
Epoch 82/100
359/359 [==============================] - 33s 91ms/step - loss: 0.8735 - accuracy: 0.6784 - val_loss: 0.9436 - val_accuracy: 0.6571
Epoch 83/100
359/359 [==============================] - 33s 93ms/step - loss: 0.8789 - accuracy: 0.6782 - val_loss: 1.0188 - val_accuracy: 0.6487
Epoch 84/100
359/359 [==============================] - 33s 91ms/step - loss: 0.8747 - accuracy: 0.6792 - val_loss: 1.0007 - val_accuracy: 0.6453
Epoch 85/100
359/359 [==============================] - 33s 92ms/step - loss: 0.8710 - accuracy: 0.6833 - val_loss: 0.9961 - val_accuracy: 0.6466
Epoch 86/100
359/359 [==============================] - 33s 91ms/step - loss: 0.8658 - accuracy: 0.6809 - val_loss: 0.9896 - val_accuracy: 0.6390
Epoch 87/100
359/359 [==============================] - 33s 92ms/step - loss: 0.8617 - accuracy: 0.6831 - val_loss: 1.0238 - val_accuracy: 0.6453
Epoch 88/100
359/359 [==============================] - 33s 92ms/step - loss: 0.8638 - accuracy: 0.6828 - val_loss: 0.9863 - val_accuracy: 0.6425
Epoch 89/100
359/359 [==============================] - 34s 94ms/step - loss: 0.8589 - accuracy: 0.6856 - val_loss: 0.9686 - val_accuracy: 0.6522
Epoch 90/100
359/359 [==============================] - 33s 91ms/step - loss: 0.8564 - accuracy: 0.6856 - val_loss: 0.9612 - val_accuracy: 0.6501
Epoch 91/100
359/359 [==============================] - 33s 91ms/step - loss: 0.8463 - accuracy: 0.6880 - val_loss: 0.9824 - val_accuracy: 0.6501
Epoch 92/100
359/359 [==============================] - 33s 92ms/step - loss: 0.8514 - accuracy: 0.6873 - val_loss: 1.0089 - val_accuracy: 0.6306
Epoch 93/100
359/359 [==============================] - 33s 92ms/step - loss: 0.8444 - accuracy: 0.6910 - val_loss: 0.9738 - val_accuracy: 0.6494
Epoch 94/100
359/359 [==============================] - 32s 90ms/step - loss: 0.8474 - accuracy: 0.6875 - val_loss: 0.9960 - val_accuracy: 0.6418
Epoch 95/100
359/359 [==============================] - 33s 93ms/step - loss: 0.8436 - accuracy: 0.6906 - val_loss: 0.9765 - val_accuracy: 0.6571
Epoch 96/100
359/359 [==============================] - 33s 92ms/step - loss: 0.8359 - accuracy: 0.6971 - val_loss: 0.9705 - val_accuracy: 0.6522
Epoch 97/100
359/359 [==============================] - 34s 94ms/step - loss: 0.8366 - accuracy: 0.6899 - val_loss: 0.9928 - val_accuracy: 0.6446
Epoch 98/100
359/359 [==============================] - 33s 93ms/step - loss: 0.8352 - accuracy: 0.6947 - val_loss: 0.9669 - val_accuracy: 0.6641
Epoch 99/100
359/359 [==============================] - 33s 93ms/step - loss: 0.8419 - accuracy: 0.6941 - val_loss: 1.0753 - val_accuracy: 0.6362
Epoch 100/100
359/359 [==============================] - 34s 94ms/step - loss: 0.8346 - accuracy: 0.6938 - val_loss: 0.9749 - val_accuracy: 0.6613
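The validation loss in the log above oscillates for long stretches without improving, so a `tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=...)` callback passed to `fit()` could have cut the run short. The helper below only illustrates that stopping rule; it is not code from the notebook:

```python
def early_stop_epoch(val_losses, patience=5):
    """Return the (1-indexed) epoch at which training would stop once
    val_loss has gone `patience` epochs without a new best, else None."""
    best = float('inf')
    wait = 0
    for epoch, loss in enumerate(val_losses, start=1):
        if loss < best:
            best, wait = loss, 0
        else:
            wait += 1
            if wait >= patience:
                return epoch
    return None

# A loss curve whose best value is at epoch 2 stops 3 epochs later
print(early_stop_epoch([1.0, 0.9, 0.95, 0.94, 0.93, 0.92], patience=3))  # -> 5
```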
In [ ]:
fig3 = plt.gcf()
plt.plot(history_4.history['accuracy'])
plt.axis(ymin=0.4,ymax=1)
plt.grid()
plt.title('Model Accuracy')
plt.ylabel('Accuracy')
plt.xlabel('Epochs')
plt.legend(['train'])
plt.show()

Finally some real progress: training accuracy reaches about 69% with validation accuracy around 66%, the highest among all the models I have tried so far.¶

In [ ]:
fig3 , ax = plt.subplots(1,2)
train_acc = history_4.history['accuracy']
train_loss = history_4.history['loss']
fig3.set_size_inches(12,4)

ax[0].plot(history_4.history['accuracy'])
ax[0].plot(history_4.history['val_accuracy'])
ax[0].set_title('Training Accuracy vs Validation Accuracy')
ax[0].set_ylabel('Accuracy')
ax[0].set_xlabel('Epoch')
ax[0].legend(['Train', 'Validation'], loc='upper left')

ax[1].plot(history_4.history['loss'])
ax[1].plot(history_4.history['val_loss'])
ax[1].set_title('Training Loss vs Validation Loss')
ax[1].set_ylabel('Loss')
ax[1].set_xlabel('Epoch')
ax[1].legend(['Train', 'Validation'], loc='upper left')

plt.show()
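Since the end goal is emotion detection in video, it is worth noting that raw per-frame predictions tend to flicker between classes. A simple smoothing step (illustrative, not from the notebook) is a majority vote over a sliding window of recent frame labels:

```python
from collections import Counter, deque

def smooth_predictions(frame_labels, window=5):
    """Majority-vote over a sliding window of per-frame predictions,
    which steadies the emotion shown on a video overlay."""
    buf = deque(maxlen=window)
    smoothed = []
    for label in frame_labels:
        buf.append(label)
        smoothed.append(Counter(buf).most_common(1)[0][0])
    return smoothed

frames = ['happy', 'happy', 'sad', 'happy', 'happy', 'sad', 'sad', 'sad']
print(smooth_predictions(frames, window=3))
```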
In [33]:
pip install pyvirtualdisplay
Looking in indexes: https://pypi.org/simple, https://us-python.pkg.dev/colab-wheels/public/simple/
Collecting pyvirtualdisplay
  Downloading PyVirtualDisplay-3.0-py3-none-any.whl (15 kB)
Installing collected packages: pyvirtualdisplay
Successfully installed pyvirtualdisplay-3.0
In [34]:
!apt update
!apt install -y xvfb python-opengl ffmpeg

from pyvirtualdisplay import Display
display = Display(visible=0, size=(1, 1))
display.start()
Reading package lists... Done
Building dependency tree       
Reading state information... Done
39 packages can be upgraded. Run 'apt list --upgradable' to see them.
Reading package lists... Done
Building dependency tree       
Reading state information... Done
ffmpeg is already the newest version (7:4.2.7-0ubuntu0.1).
The following additional packages will be installed:
  freeglut3 libpython2-stdlib python2 python2-minimal
Suggested packages:
  python-tk python-numpy libgle3 python2-doc
The following NEW packages will be installed:
  freeglut3 libpython2-stdlib python-opengl python2 python2-minimal xvfb
0 upgraded, 6 newly installed, 0 to remove and 39 not upgraded.
Need to get 1,400 kB of archives.
After this operation, 8,330 kB of additional disk space will be used.
Get:1 http://archive.ubuntu.com/ubuntu focal/universe amd64 python2-minimal amd64 2.7.17-2ubuntu4 [27.5 kB]
Get:2 http://archive.ubuntu.com/ubuntu focal/universe amd64 libpython2-stdlib amd64 2.7.17-2ubuntu4 [7,072 B]
Get:3 http://archive.ubuntu.com/ubuntu focal/universe amd64 python2 amd64 2.7.17-2ubuntu4 [26.5 kB]
Get:4 http://archive.ubuntu.com/ubuntu focal/universe amd64 freeglut3 amd64 2.8.1-3 [73.6 kB]
Get:5 http://archive.ubuntu.com/ubuntu focal/universe amd64 python-opengl all 3.1.0+dfsg-2build1 [486 kB]
Get:6 http://archive.ubuntu.com/ubuntu focal-updates/universe amd64 xvfb amd64 2:1.20.13-1ubuntu1~20.04.6 [780 kB]
Fetched 1,400 kB in 1s (1,697 kB/s)
Selecting previously unselected package python2-minimal.
(Reading database ... 128285 files and directories currently installed.)
Preparing to unpack .../python2-minimal_2.7.17-2ubuntu4_amd64.deb ...
Unpacking python2-minimal (2.7.17-2ubuntu4) ...
Selecting previously unselected package libpython2-stdlib:amd64.
Preparing to unpack .../libpython2-stdlib_2.7.17-2ubuntu4_amd64.deb ...
Unpacking libpython2-stdlib:amd64 (2.7.17-2ubuntu4) ...
Setting up python2-minimal (2.7.17-2ubuntu4) ...
Selecting previously unselected package python2.
(Reading database ... 128314 files and directories currently installed.)
Preparing to unpack .../python2_2.7.17-2ubuntu4_amd64.deb ...
Unpacking python2 (2.7.17-2ubuntu4) ...
Selecting previously unselected package freeglut3:amd64.
Preparing to unpack .../freeglut3_2.8.1-3_amd64.deb ...
Unpacking freeglut3:amd64 (2.8.1-3) ...
Selecting previously unselected package python-opengl.
Preparing to unpack .../python-opengl_3.1.0+dfsg-2build1_all.deb ...
Unpacking python-opengl (3.1.0+dfsg-2build1) ...
Selecting previously unselected package xvfb.
Preparing to unpack .../xvfb_2%3a1.20.13-1ubuntu1~20.04.6_amd64.deb ...
Unpacking xvfb (2:1.20.13-1ubuntu1~20.04.6) ...
Setting up freeglut3:amd64 (2.8.1-3) ...
Setting up libpython2-stdlib:amd64 (2.7.17-2ubuntu4) ...
Setting up xvfb (2:1.20.13-1ubuntu1~20.04.6) ...
Setting up python2 (2.7.17-2ubuntu4) ...
Setting up python-opengl (3.1.0+dfsg-2build1) ...
Processing triggers for man-db (2.9.1-1) ...
Processing triggers for libc-bin (2.31-0ubuntu9.9) ...
Out[34]:
<pyvirtualdisplay.display.Display at 0x7fb71c0d12b0>

In the last step, I am going to connect OpenCV and use the model I saved earlier as a pre-trained model.¶

I have uploaded a 4-second video, which I recorded myself, to check the accuracy of the model.¶

In [ ]:
import cv2
import numpy as np
from keras.models import load_model
from tensorflow.keras.preprocessing.image import img_to_array
from google.colab.patches import cv2_imshow  # cv2.imshow does not work in Colab


face_classifier = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
classifier = load_model('model_1.h5')

emotion_labels = ['Angry','Disgust','Fear','Happy','Neutral', 'Sad', 'Surprise']

cap = cv2.VideoCapture('/content/test_file_1.mp4')

if not cap.isOpened():
    print("Error opening video stream or file")

while True:
    ret, frame = cap.read()
    
    if not ret:
        print("End of video reached (or failed to read a frame)")
        break

    gray = cv2.cvtColor(frame,cv2.COLOR_BGR2GRAY)

    faces = face_classifier.detectMultiScale(gray)

    for (x,y,w,h) in faces:
        cv2.rectangle(frame,(x,y),(x+w,y+h),(0,255,255),2)
        roi_gray = gray[y:y+h,x:x+w]
        roi_gray = cv2.resize(roi_gray,(48,48),interpolation=cv2.INTER_AREA)

        if np.sum([roi_gray]) != 0:
            roi = roi_gray.astype('float')/255.0
            roi = img_to_array(roi)
            roi = np.expand_dims(roi,axis=0)

            prediction = classifier.predict(roi)[0]
            label = emotion_labels[prediction.argmax()]
            label_position = (x,y-10)
            cv2.putText(frame,label,label_position,cv2.FONT_HERSHEY_SIMPLEX,1,(0,255,0),2)
        else:
            cv2.putText(frame,'No Faces',(30,80),cv2.FONT_HERSHEY_SIMPLEX,1,(0,255,0),2)

    cv2_imshow(frame)
    
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
1/1 [==============================] - 0s 256ms/step
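Each `1/1 [...]` line above is Keras logging one `predict` call per detected face. The per-face preprocessing in the loop (scale pixels to [0, 1], add channel and batch axes) can be sketched with NumPy alone; the 48x48 array below is a hypothetical stand-in for the output of `cv2.resize`:

```python
import numpy as np

def prepare_roi(roi_gray: np.ndarray) -> np.ndarray:
    """Turn a 48x48 uint8 grayscale face crop into a model-ready batch."""
    roi = roi_gray.astype("float32") / 255.0   # scale pixels to [0, 1]
    roi = roi[..., np.newaxis]                 # add channel axis -> (48, 48, 1)
    roi = np.expand_dims(roi, axis=0)          # add batch axis   -> (1, 48, 48, 1)
    return roi

# Hypothetical face crop standing in for the resized ROI from the video frame.
fake_face = np.random.randint(0, 256, size=(48, 48), dtype=np.uint8)
batch = prepare_roi(fake_face)
print(batch.shape)   # (1, 48, 48, 1)
```

In the notebook the channel axis is added by `img_to_array`; the `np.newaxis` step here is the equivalent plain-NumPy operation.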

Guess what! I got many good results from different time frames of the video.¶
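Each on-frame label comes from an argmax over the model's 7-class softmax output, as in the loop above. A standalone sketch, with a hypothetical probability vector in place of a real `classifier.predict` result:

```python
import numpy as np

emotion_labels = ['Angry', 'Disgust', 'Fear', 'Happy', 'Neutral', 'Sad', 'Surprise']

# Hypothetical softmax output for one frame (one probability per class).
prediction = np.array([0.05, 0.02, 0.08, 0.60, 0.15, 0.07, 0.03])

# argmax gives the index of the most likely class, which indexes the label list.
label = emotion_labels[prediction.argmax()]
print(label)   # Happy
```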

Summary: For the emotion-detection model, I tried to increase the accuracy by changing hyper-parameters such as the learning rate, batch size, number of epochs, number of dense layers, and pool size. In the end, after applying data augmentation, I was able to reach an accuracy of around 70%. Lastly, I detected emotions from a video by combining the pre-trained model I had saved with the OpenCV library, which showed good results. I would like to extend this project in "Project II", which will be a great start for me.
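The data augmentation mentioned above is a set of random image transforms applied at training time. One of the simplest, a horizontal flip (which Keras's `ImageDataGenerator` can apply via `horizontal_flip=True`), sketched in plain NumPy on a toy array:

```python
import numpy as np

def horizontal_flip(img: np.ndarray) -> np.ndarray:
    """Mirror an image left-to-right, a classic augmentation for face images."""
    return img[:, ::-1]

img = np.arange(6).reshape(2, 3)   # toy 2x3 "image": [[0, 1, 2], [3, 4, 5]]
flipped = horizontal_flip(img)
print(flipped.tolist())   # [[2, 1, 0], [5, 4, 3]]
```

Flipping effectively doubles the number of distinct face poses the network sees, which is one plausible reason augmentation helped push the accuracy to around 70%.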